1.
Because Named-Data Networking (NDN) uses loop-free, per-packet, hop-by-hop forwarding, the success rate of returning data packets is reduced, and the ARQ and ACK mechanisms of the traditional TCP/IP stack no longer apply to multicast sessions. Since the transmission channel in NDN can be treated as equivalent to a binary erasure channel, reliable file transfer can be achieved through application-layer coding. Traditional channel-coding techniques such as convolutional codes, concatenated codes, and RS codes have high complexity, whereas combining NDN with low-complexity fountain codes enables a distributed storage architecture, so a reliable erasure-correction mechanism can be implemented in the application-layer protocol via fountain coding to guarantee reliable transfer of the file as a whole. Previous research has generally assumed a channel model with a deterministic erasure probability, but network heterogeneity, channel noise, and other factors may cause the packet-loss probability to be randomly distributed. Therefore, under a Beta-Binomial distribution model, this paper uses the prior information of Bayesian statistics and the central limit theorem to mathematically model and theoretically derive a reliable file-transfer protocol for erasure channels with random erasure probability. Simulation results show that the model is more broadly applicable: the protocol can derive the theoretical minimum number of packets to send even when the channel state is unknown, reducing redundant coded packets and raising the overall file delivery success rate, thereby improving transmission efficiency while guaranteeing reliability.
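As a rough illustration of how a minimum packet count could be derived under such a model, the sketch below combines a Beta-Binomial loss distribution with a normal (central-limit) approximation; the Beta parameters, confidence level, and fixed fountain-code overhead factor are assumptions for illustration, not values from the paper.

```python
import math

def min_packets(k, alpha, beta, z=2.326, overhead=1.05):
    """Smallest n such that, under a Beta-Binomial(n, alpha, beta) loss
    model, at least ceil(overhead * k) packets survive with probability
    roughly Phi(z), using a normal (CLT) approximation of the loss count."""
    need = math.ceil(overhead * k)  # symbols the fountain decoder needs
    for n in range(need, 100 * need):
        mean_loss = n * alpha / (alpha + beta)
        var_loss = (n * alpha * beta * (alpha + beta + n)
                    / ((alpha + beta) ** 2 * (alpha + beta + 1)))
        # Lower confidence bound on the number of packets received.
        if n - mean_loss - z * math.sqrt(var_loss) >= need:
            return n
    raise ValueError("channel too lossy for this approximation")

# e.g. 1000 source symbols, erasure probability ~ Beta(2, 18) (mean 0.1)
print(min_packets(1000, alpha=2.0, beta=18.0))
```

The wider the Beta prior (i.e., the more uncertain the channel), the larger the variance term and the more extra packets the bound demands.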
2.
This paper presents a method for designing and implementing an extended register file for a high-performance general-purpose DSP. It is a new way of extending the register file in China's independently developed high-performance general-purpose DSPs: its advantage is that the processor's internal register file is extended proportionally without affecting the existing instruction set or the bit width of the instruction machine code. Practical application in an independently developed Chinese DSP demonstrates the effectiveness and practicality of the extension method.
3.
To address the single point of failure of the MooseFS metadata node, the MooseFS source code is modified to add a hot-standby metadata node. By synchronizing metadata between the primary and the standby and having the standby replay the operation log, the metadata in the standby node's memory is kept consistent with the primary metadata node at all times. When the primary metadata node fails and service switches over, the standby can quickly take over and serve requests without first loading metadata from local storage. Test results show that the standby node's on-disk and in-memory metadata both remain consistent with the primary, and the failover time is less than 1 s.
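The heart of the scheme — keeping the standby's in-memory metadata current by replaying the primary's operation log — can be sketched as follows; the entry format, operation names, and transport are illustrative assumptions, not MooseFS internals.

```python
# Minimal sketch of a standby metadata node's log-replay loop.
import json
import socket

metadata = {}  # in-memory copy of the namespace: path -> attributes

def apply_entry(entry):
    """Apply one operation-log entry to the in-memory metadata."""
    op = entry["op"]
    if op == "create":
        metadata[entry["path"]] = entry["attrs"]
    elif op == "setattr":
        metadata[entry["path"]].update(entry["attrs"])
    elif op == "unlink":
        metadata.pop(entry["path"], None)

def replay_from_primary(host, port):
    """Stream newline-delimited JSON log entries from the primary and
    apply them, so the standby never needs a cold metadata load."""
    with socket.create_connection((host, port)) as conn:
        for line in conn.makefile():
            apply_entry(json.loads(line))
```

Because every mutation reaches the standby as a log entry, failover reduces to promoting the already-populated in-memory copy, which is what keeps the switchover under a second.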
4.
With the rapid development of business transactions, especially in recent years, it has become necessary to develop mechanisms that trace business users' records in the web server log efficiently. Online business transactions have increased, especially when the user or customer cannot obtain the required service otherwise; for example, with the spread of the Coronavirus (COVID-19) epidemic throughout the world, there is a dire need to rely more on online business processes. To improve the efficiency and performance of an E-business structure, the web server log must be well utilized so that an unbounded stream of user transactions can be traced and recorded. This paper proposes an event stream mechanism based on formula patterns to enhance business processes and record all user activities in a structured log file. Each user activity is recorded with a set of tracing parameters that can predict the user's behavior in business operations. The experiments apply clustering-based classification algorithms to two different datasets, namely Online Shoppers Purchasing Intention and Instacart Market Basket Analysis. The clustering process groups related objects into the same cluster; the classification process then measures the predicted classes of the clustered objects. The experimental results show demonstrable accuracy in predicting user preferences on both datasets.
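A minimal sketch of such a clustering-then-classification evaluation is shown below; the filename, feature selection, cluster count, and choice of classifier are assumptions for illustration, not the paper's exact pipeline.

```python
# Cluster sessions, then measure how well a classifier predicts
# the cluster-derived classes on held-out data.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("online_shoppers_intention.csv")  # hypothetical filename
X = df.select_dtypes("number").values              # numeric features only

# Group related sessions, then treat cluster labels as classes.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```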
5.
Digestograms from 101 published in vitro starch digestion studies were used to investigate slope discontinuities. Polynomial equations (orders 1 to 3) adequately described the first derivative of the digestograms, and the derivatives of these equations revealed the critical points. The third-order equation described (P ≤ 0.05) 17% of the digestograms as triphasic, the second- and third-order equations identified (P ≤ 0.05) 32% as biphasic, while 51% exhibited (P ≤ 0.05) monophasic digestograms. Using nonlinear regression with practical constraints, a modified first-order kinetic model, Dt = D0 + D∞−0[1 − exp(−Kt)], described (r² > 0.56, P ≤ 0.05) segments 1–3 of the digestograms. Rapid–slow and slow–rapid digestion rates were obtained, and maximum digestible starches, D∞, were ≤ 100 g/100 g (dry) starch, giving an in-depth understanding of starch digestion. This is the first comprehensive, objective approach to slope discontinuities in starch digestograms, providing consistency in modelling digestograms and advancing starch digestion studies.
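A minimal sketch of fitting this kinetic model to a single digestogram segment with constrained nonlinear regression might look as follows; the sample data points, initial guesses, and bounds are invented for illustration.

```python
# Fit D_t = D0 + D_inf0 * (1 - exp(-K * t)) to one digestogram segment.
import numpy as np
from scipy.optimize import curve_fit

def model(t, d0, d_inf0, k):
    return d0 + d_inf0 * (1.0 - np.exp(-k * t))

t = np.array([0, 15, 30, 60, 90, 120, 180], float)       # time, min
d = np.array([2.0, 21.5, 37.0, 58.0, 70.5, 78.0, 84.0])  # g/100 g starch

# Practical constraints: non-negative parameters, plausible upper bounds.
params, _ = curve_fit(model, t, d, p0=[1.0, 80.0, 0.02],
                      bounds=([0, 0, 0], [20, 100, 1]))
print("D0=%.2f  D_inf0=%.2f  K=%.4f /min" % tuple(params))
```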
6.
A 1-read/1-write (1R1W) register file (RF) is a popular memory configuration in modern feature-rich SoCs, which require a significant amount of embedded memory. A memory compiler is constructed around an 8T RF bitcell, spanning instances from 32 b to 72 Kb. A low-leakage 8T bitcell of 0.106 μm², in a 14 nm FinFET technology with a 70 nm contacted gate pitch, is used for a high-density (HD) two-port (TP) RF memory compiler that achieves 5.66 Mb/mm² array density for a 72 Kb array, the highest density reported in 14 nm FinFET technology. The density improvement comes from techniques such as leaf-cell optimization (eliminating transistors), better architectural planning, top-level connectivity through leaf-cell abutment, and minimizing the number of unique leaf-cells. These techniques are fully compatible with memory compiler usage over the required span. Leakage power is minimized with power-switches without degrading the density mentioned above. A self-induced supply-voltage collapse technique is applied for write, and a four-stack static keeper is used for read Vmin improvement. Fabricated test chips in the 14 nm process have demonstrated 2.33 GHz performance at 1.1 V and 25 °C. An overall Vmin of 550 mV is achieved with this design at 25 °C. The inbuilt power-switch reduces leakage power by 12x in simulation. Approximately 8% of the die area of a leading 14 nm SoC in commercialization is occupied by these compiled RF instances.
7.
A security management platform (SMP) is the technical platform that supports routine, ongoing security management; in practice it must process, in real time, the massive volume of log messages generated by security devices. To address the low efficiency of querying massive logs in existing SMPs, a cloud-based SMP log storage and analysis system is designed. Using Hive's query-to-task translation model, together with the Hadoop distributed file system and the MapReduce parallel programming model, the system stores and queries massive SMP logs effectively. Experimental results show that, compared with multi-table join queries over a relational database, the system improves the average query efficiency for SMP logs by about 90% and speeds up the overall response of centralized SMP management and control.
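To illustrate the MapReduce side of the design, here is a pure-Python stand-in for a simple per-device log count; the tab-separated log layout is an assumption, and a real deployment would express such aggregations in Hive and let it compile them into Hadoop jobs.

```python
# Toy map/reduce over SMP log lines: count events per source device.
from collections import defaultdict

def map_phase(lines):
    """Emit (source_device, 1) for each log line."""
    for line in lines:
        device = line.split("\t", 1)[0]  # first tab-separated field
        yield device, 1

def reduce_phase(pairs):
    """Sum the counts per device, as the equivalent Hive query would."""
    counts = defaultdict(int)
    for device, n in pairs:
        counts[device] += n
    return dict(counts)

logs = ["fw01\t2023-01-01\tWARN", "ids02\t2023-01-01\tALERT",
        "fw01\t2023-01-02\tWARN"]
print(reduce_phase(map_phase(logs)))  # {'fw01': 2, 'ids02': 1}
```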
8.
Nowadays, the rapid development of the internet calls for high-performance file systems, and much effort has already been devoted to assigning nonpartitioned files in a parallel file system so as to respond promptly to requests. Yet most existing strategies still fail to achieve optimal system mean response time, and new strategies that perform better in terms of mean response time have become indispensable for parallel file systems. Addressing the assignment of nonpartitioned files in parallel file systems where file accesses exhibit Poisson arrival rates and fixed service times, this paper presents an on-line file assignment strategy, named prediction-based dynamic file assignment (PDFA), that minimizes the mean response time among disks under different workload conditions, and compares PDFA with well-known file assignment algorithms such as HP and SOR. Comprehensive experimental results show that PDFA consistently outperforms all compared algorithms in terms of mean response time.
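As a point of reference for the problem being solved (this is a simplified greedy baseline, not the PDFA algorithm itself), the sketch below places each arriving file on the disk with the lowest predicted utilization, using each file's Poisson arrival rate and fixed service time.

```python
# Greedy on-line file assignment: send each file to the least-loaded disk,
# where load is the sum of arrival_rate * service_time over assigned files.

def assign(files, n_disks):
    """files: list of (file_id, arrival_rate, service_time) tuples."""
    load = [0.0] * n_disks
    placement = {}
    for file_id, lam, svc in files:
        disk = min(range(n_disks), key=lambda d: load[d])
        load[disk] += lam * svc  # expected utilization contribution
        placement[file_id] = disk
    return placement

files = [("a", 5.0, 0.02), ("b", 1.0, 0.05), ("c", 3.0, 0.01)]
print(assign(files, n_disks=2))
```

A prediction-based strategy like PDFA would additionally forecast future access rates rather than relying only on the current load snapshot.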
9.
A log statement is one of the key tactics for developers to record and monitor important run-time behaviors of a system during development and maintenance. It is composed of a message stating the log contents and a log level (e.g., debug or warn) that denotes the severity of the message and controls its visibility at run time. Despite its usefulness, developers tend not to consider deeply which log level is appropriate when writing source code, which makes the system harder to maintain. To address this issue, this paper proposes an automatic approach that validates the appropriateness of a log level based on semantic and syntactic features and recommends a proper alternative log level. We first build a semantic feature vector that quantifies the semantic similarity among application log messages using a word vector space, and a syntactic feature vector that captures the application context surrounding the log statement. Based on these feature vectors and machine learning techniques, the log level is automatically validated, and an alternative level is recommended if the current one is invalid. For the evaluation, we collected 22 open-source projects from three application domains and obtained 77% precision and 75% recall in validating log levels. Our approach also showed 6% higher accuracy than a group of developers with 7 to 8 years of work experience, and 72% of the developers accepted our recommendations.
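The validation pipeline can be sketched roughly as below: embed each log message in a word-vector space, append a syntactic context feature, and train a classifier over known-good levels. The toy embeddings, the single context flag, and the choice of classifier are illustrative assumptions.

```python
# Sketch: semantic (word-vector) + syntactic features -> log-level classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

EMB = {"failed": [1.0, 0.0], "connect": [0.8, 0.1], "start": [0.0, 1.0],
       "retry": [0.9, 0.2], "server": [0.4, 0.5]}  # toy word vectors

def semantic_vec(message):
    """Average the word vectors of the message's known words."""
    vecs = [EMB[w] for w in message.lower().split() if w in EMB]
    return np.mean(vecs, axis=0) if vecs else np.zeros(2)

def features(message, in_catch_block):
    # Syntactic context reduced to a single flag for illustration.
    return np.append(semantic_vec(message), float(in_catch_block))

X = [features("failed to connect server", True),
     features("server start", False),
     features("retry connect", True)]
y = ["warn", "info", "warn"]  # known-good levels from existing code

clf = LogisticRegression().fit(X, y)
print(clf.predict([features("failed retry", True)]))  # -> ['warn']
```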
10.
In this explorative study, we investigate how sequences of behaviour are related to success or failure in complex problem-solving (CPS). To this end, we analysed log data from two different tasks of the problem-solving assessment of the Programme for International Student Assessment 2012 study (n = 30,098 students). We first coded every interaction of students as (initial or repeated) exploration, (initial or repeated) goal-directed behaviour, or resetting the task. We then split the data according to task successes and failures. We used full-path sequence analysis to identify groups of students with similar behavioural patterns in the respective tasks. Double-checking and minimalistic behaviour were associated with success in CPS, while guessing and exploring task-irrelevant content were associated with failure. Our findings held for both tasks investigated, which come from two different CPS measurement frameworks. We thus gained detailed insight into the behavioural processes related to success and failure in CPS.
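The interaction-coding step might be sketched as follows; the event representation and category names are assumptions based on the description above, not the study's actual coding scheme.

```python
# Toy sketch: label each interaction as initial/repeated exploration,
# initial/repeated goal-directed behaviour, or a task reset.
def code_events(events):
    """events: list of (kind, target), e.g. ("explore", "tab2")."""
    seen_explore, seen_goal, coded = set(), set(), []
    for kind, target in events:
        if kind == "reset":
            coded.append("reset")
        elif kind == "explore":
            coded.append("explore_rep" if target in seen_explore
                         else "explore_init")
            seen_explore.add(target)
        else:  # goal-directed action
            coded.append("goal_rep" if target in seen_goal
                         else "goal_init")
            seen_goal.add(target)
    return coded

print(code_events([("explore", "tab1"), ("explore", "tab1"),
                   ("goal", "slider"), ("reset", None)]))
```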